Rademacher Penalty Optimization with the Generalized Ramp Loss
Abstract
where F = {yh(x) : h ∈ H} and φ(h(x), y) is the loss function of interest. The labeled training sample S = {(xi, yi) | i ∈ {1, ..., n}, xi ∈ R, yi ∈ {±1}} is assumed to be drawn from an independent, identically distributed (IID) process. With each Rademacher random variable σi ∈ {±1} drawn uniformly, a realization of the Rademacher vector σ amounts, with high probability, to a partition of the training set into two approximately equal-sized subsets. Re-expressing (1) to make this explicit yields
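To make the quantity above concrete, the empirical Rademacher penalty can be estimated by Monte Carlo: draw Rademacher vectors σ, correlate each draw with every hypothesis' per-example losses, take the supremum over hypotheses, and average over draws. The sketch below is illustrative only; the ramp parameterization `s` and the finite model set are assumptions, not details taken from the paper.

```python
import numpy as np

def ramp_loss(margins, s=0.0):
    """One common generalized-ramp parameterization: the hinge 1 - z,
    clipped below at 0 and above at 1 - s (s is an assumed parameter)."""
    return np.minimum(np.maximum(1.0 - margins, 0.0), 1.0 - s)

def empirical_rademacher_penalty(loss_matrix, n_draws=2000, seed=0):
    """Monte Carlo estimate of E_sigma[ sup_h (1/n) sum_i sigma_i * loss_i(h) ].

    loss_matrix: shape (n_models, n_samples); each row holds one hypothesis'
    per-example losses phi(h(x_i), y_i).
    """
    rng = np.random.default_rng(seed)
    n_models, n = loss_matrix.shape
    # each row of sigma is one realization of the Rademacher vector
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))
    # correlation of every model's loss vector with every sigma draw
    corr = sigma @ loss_matrix.T / n          # shape (n_draws, n_models)
    # sup over the (finite) model set, then mean over draws
    return corr.max(axis=1).mean()
```

With a small fixed model set, each σ draw effectively splits the sample in two and measures how differently the two halves score, matching the partition interpretation above.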
Similar resources
Ramp loss linear programming support vector machine
The ramp loss is a robust but non-convex loss for classification. Compared with other non-convex losses, a local minimum of the ramp loss can be found effectively. The effectiveness of local search comes from the piecewise linearity of the ramp loss. Motivated by the fact that the ℓ1-penalty is piecewise linear as well, the ℓ1-penalty is applied to the ramp loss, resulting in a ramp loss linea...
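The piecewise linearity this snippet relies on comes from writing the ramp loss as a difference of two convex hinges (a DC decomposition), which is what makes linear-programming and local-search approaches tractable. A minimal sketch, assuming the standard ramp min(1, max(0, 1 − z)):

```python
import numpy as np

def hinge(z, t=1.0):
    # shifted hinge: max(0, t - z), convex and piecewise linear
    return np.maximum(0.0, t - z)

def ramp(z):
    # ramp loss as a difference of two convex hinges (DC decomposition);
    # each piece is linear, so each DC iteration reduces to an LP
    return hinge(z, 1.0) - hinge(z, 0.0)
```

For margins z = 2, 0.5, and −1 this gives losses 0, 0.5, and 1: correct points cost nothing, and badly misclassified points are capped at 1, which is the robustness the snippet mentions.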
Ramp Rates Control of Wind Power Output Using a Storage System and Gaussian Processes
This paper suggests operation policies for storage devices that limit the ramp rates of wind power output. The policies are based on a linear-programming optimization framework, under the assumption that there is a penalty cost for deviating outside the ramp rate limits. The policies determine optimal storage operations that minimize the total penalty costs incurred when violating ramp rate limits. ...
SEQUENTIAL PENALTY HANDLING TECHNIQUES FOR SIZING DESIGN OF PIN-JOINTED STRUCTURES BY OBSERVER-TEACHER-LEARNER-BASED OPTIMIZATION
Despite a comprehensive literature on developing fitness-based optimization algorithms, their performance is still challenged by constraint handling in various engineering tasks. The present study concerns the widely used external penalty technique for the sizing design of pin-jointed structures. Observer-teacher-learner-based optimization is employed here, since it has previously been addressed by a number ...
Model selection using Rademacher Penalization
In this paper we describe the use of Rademacher penalization for model selection. As in Vapnik's Guaranteed Risk Minimization (GRM), Rademacher penalization attempts to balance the complexity of the model with its fit to the data by minimizing the sum of the training error and a penalty term, which is an upper bound on the absolute difference between the training error and the generalization error....
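The selection rule this snippet describes, minimizing training error plus a data-dependent penalty, can be sketched as follows; the error and penalty values are hypothetical, and the penalties would in practice come from an estimator like the Monte Carlo one above:

```python
def select_model(train_errors, penalties):
    """Return the index of the model minimizing training error + penalty."""
    scores = [e + p for e, p in zip(train_errors, penalties)]
    return min(range(len(scores)), key=scores.__getitem__)

# e.g. the most complex model fits best but pays the largest penalty,
# so the middle model wins: scores are 0.12, 0.09, 0.16
best = select_model(train_errors=[0.10, 0.05, 0.01],
                    penalties=[0.02, 0.04, 0.15])
```

Here `best` is 1: the penalty term vetoes the low-training-error model whose complexity makes its training error an unreliable estimate of generalization error.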
Rademacher penalties and structural risk minimization
We suggest a penalty function to be used in various problems of structural risk minimization. This penalty is data dependent and is based on the sup-norm of the so-called Rademacher process indexed by the underlying class of functions (sets). The standard complexity penalties, used in learning problems and based on the VC dimensions of the classes, are conservative upper bounds (in a probabilist...
Publication year: 2009